Mixer Scaler #1676
Conversation
Codecov Report

Additional details and impacted files:

```
@@               Coverage Diff               @@
##        scaling_toolbox    #1676    +/-  ##
=============================================
- Coverage         77.21%   77.21%   -0.01%
=============================================
  Files               395      395
  Lines             63859    63923      +64
  Branches          10606    10627      +21
=============================================
+ Hits              49306    49355      +49
- Misses            12065    12079      +14
- Partials           2488     2489       +1
```
Seems like the only significant changes are taking place in the mixer.py and test_mixer.py files, right? I'm assuming the rest is carryover from #1673
```python
for var in model.conc_mass_comp.values():
    self.scale_variable_by_default(var, overwrite=overwrite)
```
What happens if you do this but one or more of the components doesn't have a default scaling factor? Would this still run successfully? If so, what scaling factor gets set for the unspecified component? If it fails, how does it fail? Is there a warning or error message, and if so, is this tested?
It raises an exception:

```python
raise ValueError(
    "This scaler requires the user to provide a default "
    f"scaling factor for {component}, but no default scaling "
    "factor was set."
)
```
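To illustrate the behavior described above, here is a minimal, self-contained sketch of the default-scaling lookup pattern. This is not the IDAES implementation; `ScalerSketch`, its attributes, and the example component names are hypothetical stand-ins that mirror the exception quoted in this thread:

```python
class ScalerSketch:
    """Hypothetical stand-in for a scaler with user-supplied defaults."""

    def __init__(self, default_scaling_factors=None):
        # Mapping of component name -> user-provided default scaling factor
        self.default_scaling_factors = dict(default_scaling_factors or {})
        # Scaling factors actually applied so far
        self.scaling_factors = {}

    def scale_variable_by_default(self, component, overwrite=False):
        if component not in self.default_scaling_factors:
            # Mirrors the exception quoted in the review thread
            raise ValueError(
                "This scaler requires the user to provide a default "
                f"scaling factor for {component}, but no default scaling "
                "factor was set."
            )
        if overwrite or component not in self.scaling_factors:
            self.scaling_factors[component] = self.default_scaling_factors[component]


scaler = ScalerSketch({"conc_mass_comp[H2O]": 1e-3})
scaler.scale_variable_by_default("conc_mass_comp[H2O]")  # succeeds
try:
    scaler.scale_variable_by_default("conc_mass_comp[NaCl]")  # no default set
except ValueError as err:
    print("raised:", err)
```

The point is that a missing default fails loudly at scaling time rather than silently applying a factor of 1.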
```python
self[k].hso4_dissociation.deactivate()

# ...

class LeachSolutionScaler(CustomScalerBase):
```
Why is this being built in this file?
We needed a test case with inherent reactions. I considered pulling both this scaler and the parameter block it's designed to scale out of this test file, but I saw that it had non-SI base units and didn't want to spread that further in the IDAES code base.
In that case, could you just leave an in-line comment/TODO note stating what you've said here?
```python
@pytest.mark.component
@pytest.mark.solver
def test_no_numerical_warnings(self, model):
    # TODO revisit once the diagnostics toolbox takes scaling into account
```
Just out of curiosity, is this something that is being actively worked on? I think it'd be amazing if the toolbox could take scaling into account, but I recall Andrew mentioning that this was easier said than done.
It's actually straightforward to do now. We had issues when we had scale_model=False in the writer config, so it was unclear whether the user_scaling option was being used for IPOPT. Now we have scale_model=True, so we can always return the scaled version of the Jacobian, constraint residuals, and variable distance from bounds.
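As background for the scaled-diagnostics point, the core operation is just row/column scaling of the Jacobian: with constraint scaling factors c_i and variable scaling factors s_j, the scaled entry is c_i * J_ij / s_j. The sketch below is a generic illustration, not the IDAES/toolbox API; the matrix and factor values are made up:

```python
def scale_jacobian(J, con_scale, var_scale):
    """Return the scaled Jacobian D_c * J * D_x^{-1} as nested lists.

    con_scale[i] scales constraint (row) i; var_scale[j] scales
    variable (column) j, so scaled entries are c_i * J_ij / s_j.
    """
    return [
        [con_scale[i] * J[i][j] / var_scale[j] for j in range(len(J[0]))]
        for i in range(len(J))
    ]


# Hypothetical, badly scaled 2x2 Jacobian (illustrative numbers only)
J = [[1.0e4, 2.0e4],
     [3.0e-2, 5.0e-2]]
con_scale = [1.0e-4, 1.0e1]  # constraint scaling factors
var_scale = [1.0, 2.0]       # variable scaling factors

print(scale_jacobian(J, con_scale, var_scale))
# entries are now all O(1): [[1.0, 1.0], [0.3, 0.25]]
```

With `scale_model=True` the solver sees this scaled system, so diagnostics computed on the scaled Jacobian, residuals, and bound distances reflect what the solver actually works with.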
```python
    model.material_flow_dx[idx], v, overwrite=overwrite
)

# ...

# TODO elemental flows
```
Will this be part of another PR?
Yes.
```python
    v,
) in model._enthalpy_flow.items():  # pylint: disable=protected-access
    # Normalized domain, so scale should be the same as flow
    # TODO is this correct?
```
Does this still need to be addressed?
Yes, but now isn't the time to do it. I need to take some time to sit down with the advection equations and explore the conditioning of the solution, both in the case of a single ControlVolume1D and a counter-current unit model such as the HeatExchanger1D. The current scaling is adequate for many purposes.
```python
def approx(x):
    return pytest.approx(x, rel=1e-15, abs=0)
```
This tolerance is extremely tight; could this cause failures in the future?
Here we're trying to match scaling factors like 1/10 * 1/5 * 3/10. Floating-point arithmetic is not associative, so (1/10 * 1/5) * (3/10) != (1/10) * (1/5 * 3/10). However, the two results will agree to within a relative tolerance of machine epsilon, which is 2.22e-16 for double-precision arithmetic in Python. So no, there shouldn't be failures in the future from this.
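The non-associativity point is easy to demonstrate directly. This is a standalone illustration using the same three factors as in the comment above:

```python
import math

# The three scaling factors from the discussion above
x, y, z = 1 / 10, 1 / 5, 3 / 10

left = (x * y) * z    # evaluated left-to-right
right = x * (y * z)   # same factors, different grouping

# The two groupings may round differently in the last bit or two...
print(left, right, left == right)

# ...but they always agree to within a few machine epsilons
# (machine epsilon is 2.220446049250313e-16 for doubles), so a
# relative tolerance of 1e-15 is comfortably loose enough.
print(math.isclose(left, right, rel_tol=1e-15))  # prints True
```

This is why `rel=1e-15, abs=0` in `approx` is tight but still robust for comparing products of scaling factors.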
LGTM
Summary/Motivation:
Scaler object for the mixer unit model. Depends on #1673.
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: